Software Defect Prediction Based on Optimized Machine Learning Models: A Comparative Study
DOI: https://doi.org/10.34148/teknika.v12i2.634

Keywords: Machine Learning Models, Software Defect Prediction, Random Search, Principal Component Analysis, Hyperparameter Tuning

Abstract
Software defect prediction is crucial for detecting possible defects in software before they manifest. Although machine learning models have become more prevalent in software defect prediction, their effectiveness can vary with the dataset and the model's hyperparameters. Difficulties arise in determining the most suitable hyperparameters for a model, as well as in identifying the prominent features that serve as input to the classifier. This research evaluates several traditional machine learning models optimized for software defect prediction on the NASA MDP (Metrics Data Program) datasets. The datasets were classified using k-nearest neighbors (k-NN), decision trees, logistic regression, linear discriminant analysis (LDA), a single-hidden-layer multilayer perceptron (SHL-MLP), and support vector machines (SVM). The hyperparameters of the models were fine-tuned using random search, and feature dimensionality was reduced using principal component analysis (PCA). The synthetic minority oversampling technique (SMOTE) was applied to oversample the minority class and correct the class imbalance. k-NN was found to be the most suitable model for software defect prediction on several datasets, while SHL-MLP and SVM were also effective on certain datasets. Notably, logistic regression and LDA did not perform as well as the other models. Moreover, the optimized models outperformed the baseline models in classification accuracy. The choice of model for software defect prediction should therefore be based on the specific characteristics of the dataset, and hyperparameter tuning can improve the accuracy of machine learning models in predicting software defects.
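The workflow the abstract describes, SMOTE oversampling of the minority class, PCA for dimensionality reduction, and random-search tuning of a classifier such as k-NN, can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the hand-rolled `smote` helper, the synthetic stand-in dataset, and the search ranges are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def smote(X_min, n_new, k=5):
    """SMOTE-style oversampling: each synthetic sample lies on the segment
    between a minority sample and one of its k nearest minority neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)              # column 0 is the point itself
    base = rng.integers(0, len(X_min), n_new)
    neigh = idx[base, rng.integers(1, k + 1, n_new)]
    gap = rng.random((n_new, 1))               # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# Imbalanced stand-in for a NASA MDP dataset: ~10% defective, 21 metrics.
X, y = make_classification(n_samples=1000, n_features=21, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3,
                                          random_state=0)

# Balance the training set only, so evaluation stays on untouched data.
n_new = int(np.sum(y_tr == 0) - np.sum(y_tr == 1))
X_syn = smote(X_tr[y_tr == 1], n_new)
X_bal = np.vstack([X_tr, X_syn])
y_bal = np.concatenate([y_tr, np.ones(n_new, dtype=int)])

# PCA and k-NN hyperparameters are tuned jointly by random search.
pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA()),
                 ("knn", KNeighborsClassifier())])
param_dist = {"pca__n_components": range(2, 16),
              "knn__n_neighbors": range(1, 21),
              "knn__weights": ["uniform", "distance"]}
search = RandomizedSearchCV(pipe, param_dist, n_iter=25, cv=5, random_state=0)
search.fit(X_bal, y_bal)
print("best params:", search.best_params_)
print("test accuracy:", round(search.best_estimator_.score(X_te, y_te), 3))
```

The paper itself relies on scikit-learn and imbalanced-learn; the library `SMOTE` class could replace the helper above, and swapping `KNeighborsClassifier` for any of the other evaluated models changes only the pipeline step and its search space.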